19 - Metropolis--Hastings [ID:15872]

Hi. So finally we can do something useful, which is thinking about our first Markov chain

Monte Carlo method, and this one is called the Metropolis-Hastings method.

The Metropolis-Hastings method is one of the solutions to the problem we thought about

last time.

Our goal is, given a distribution mu which we want to sample from, to construct a Markov

chain such that a sequence of samples obtained from simulating this Markov chain converges

to its invariant density, and this invariant density, or this invariant measure, needs to

be equal to mu.

So that's really complicated.

We're starting with something that we can't sample from.

Now we're supposed to construct a Markov chain such that this Markov chain has this measure

mu as its invariant density.

It sounds completely impossible but it has a surprisingly easy solution and it works

like this.

So the first thing to do is we first construct a different Markov chain, so something else.

So a Markov chain which is not invariant with respect to our measure mu but which is easy

to implement.

Usually this is some kind of random walk.

And with that Markov chain we sample a so-called proposal.

So this is not quite a sample yet.

It might become a sample if we accept it; if we don't accept it, it won't.

So this will be something similar to rejection sampling, where we sometimes accept

and sometimes reject samples.

So what do we do with those proposals?

We add such an accept and reject step and it works like that.

If the proposal is accepted then this proposal becomes a sample, and it goes into the list of

samples we already have.

If it's rejected then, well, with rejection sampling we would just try again and again

until we accept something.

Now with Markov chain Monte Carlo we instead take the previous sample again and put it

into this list.

So every time we don't accept a proposal the previous sample is copied and used another

time.

So that has the advantage that if we do a thousand steps then we get a thousand samples

no matter what.
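The bookkeeping described above can be sketched as follows. This is a minimal illustration, not the lecture's exact construction: it assumes a standard normal target density mu (unnormalized), a Gaussian random-walk proposal, and the standard Metropolis acceptance probability min(1, mu(y)/mu(x)), which is valid for a symmetric proposal; the lecture only derives the accept/reject rule later. The point here is the bookkeeping: a rejected proposal copies the previous sample, so n_steps steps always yield n_steps samples.

```python
import math
import random

def mu(x):
    # Unnormalized target density (assumption: standard normal).
    return math.exp(-0.5 * x * x)

def metropolis(n_steps, x0=0.0, step=1.0, seed=0):
    rng = random.Random(seed)
    samples = []
    x = x0
    for _ in range(n_steps):
        # Proposal from an easy-to-simulate random walk.
        y = x + rng.gauss(0.0, step)
        # Accept with probability min(1, mu(y)/mu(x)); this simple
        # ratio is correct only because the random walk is symmetric.
        if rng.random() < min(1.0, mu(y) / mu(x)):
            x = y  # accepted: the proposal becomes the next sample
        # rejected: x is unchanged, so the previous sample is copied
        samples.append(x)
    return samples

chain = metropolis(1000)
# 1000 steps produce exactly 1000 samples, accepted or not.
```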

With rejection sampling it could very well happen that with a thousand steps we only

got one sample or none at all.
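For contrast, a minimal rejection-sampling sketch (assuming, just for illustration, the same unnormalized standard normal target restricted to [-5, 5] with uniform proposals) shows why the number of samples produced per step is random:

```python
import math
import random

def mu(x):
    # Unnormalized target density (assumption: standard normal).
    return math.exp(-0.5 * x * x)

def rejection_sample(n_steps, seed=0):
    # Propose uniformly on [-5, 5]; since mu(x) <= 1 there, we can
    # accept each proposal with probability mu(x) directly.
    rng = random.Random(seed)
    samples = []
    for _ in range(n_steps):
        x = rng.uniform(-5.0, 5.0)
        if rng.random() < mu(x):
            samples.append(x)
    return samples

accepted = rejection_sample(1000)
# len(accepted) is random and typically well below 1000.
```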

And this modification, this accept-and-reject step, has to be such that this correction

makes the Markov chain mu-invariant.

The next step will be figuring out how exactly to do this accept-reject step.

So this proposal Markov chain has some invariant density, but that's not important because it's the

wrong invariant density. Only adding this accept-reject step creates the modified Markov

chain: the old Markov chain plus accepting or rejecting becomes a new Markov chain.

This new Markov chain will then have the correct measure mu as its invariant measure.

A good example is the following.

So we first need to define this proposal Markov chain and notation is difficult here.

So this Markov chain has of course a transition kernel.

So given a value X_n = x, the probability that we go somewhere in some

set A is defined by this proposal kernel Q(x, A), that is, Q(x, A) = P(X_{n+1} in A | X_n = x),

the probability of going from this point x into the set A.
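As a concrete example (an assumption for illustration, since no particular Q has been fixed yet), a Gaussian random walk has proposal kernel Q(x, A) = P(x + sigma*Z in A) with Z standard normal; drawing one proposal from Q(x, .) is then just:

```python
import random

def propose(x, sigma=0.5, rng=random):
    # Draw y ~ Q(x, .) for the Gaussian random-walk kernel:
    # Q(x, A) = P(x + sigma * Z in A), with Z ~ N(0, 1).
    return x + rng.gauss(0.0, sigma)

y = propose(2.0)  # one proposal, starting from the current state x = 2.0
```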

Part of a video series:

Access: open access

Duration: 00:20:03 min

Recording date: 2020-05-14

Uploaded on: 2020-05-14 23:46:24

Language: en-US
